On Sun, Mar 1, 2009 at 2:58 AM, Kenneth Kalmer <[email protected]> wrote:
> On Sat, Feb 28, 2009 at 6:32 PM, J B <[email protected]> wrote:
>>
>>
>> On Sat, Feb 28, 2009 at 10:21 AM, John Mettraux <[email protected]>
>> wrote:
>>> One major drawback of ruote "on Rails" is that, since it requires
>>> threads for its workqueue, it can't work on passenger for instance.
>>> Multi process mongrel packs require tweaking. It's not a big problem
>>> for small company deployments, but the tendency in the Rails world (as
>>> you know) is to "scale somehow", so :(
>>
>> Have these drawbacks been detailed in another thread? If not, would you
>> care to expound on them here?
>
> Not outside #ruote, and I lost my chat logs recently...
>
> What I discussed with John was my concerns over the ruote engine running
> inside mongrel. We have three scenarios to cope with here:
>
> 1) Small app, single mongrel
> 2) Non-small app, multiple mongrels (single or multiple hosts, doesn't
> matter)
> 3) New-kid-on-the-block Passenger deployments

Hi John, Hi Kenneth,

Great reply Kenneth, it makes my Sunday easier :)

Yes, Passenger seems quite radical with threading:
http://groups.google.com/group/rufus-ruby/browse_frm/thread/8ca374769edf98b5

Life is quite simple for state machines (aasm, workflow) in Rails:
they are simply "request bound". A request comes in, triggers a
transition, the resource reaches its new state, done. Nothing else to
do before the next transition-triggering request.
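To make that concrete, here is a minimal hand-rolled sketch of such a
request-bound transition (plain Ruby standing in for aasm/workflow,
whose DSLs are much richer; Order, advance! and the state names are
made up for the example):

```ruby
# A hand-rolled "request bound" state machine: each incoming request
# triggers exactly one transition, then nothing happens until the next
# request.
class Order
  TRANSITIONS = {
    'pending' => 'paid',
    'paid'    => 'shipped'
  }

  attr_reader :state

  def initialize
    @state = 'pending'
  end

  # would be called from a controller action: one request, one transition
  def advance!
    next_state = TRANSITIONS[@state] or raise "no transition from #{@state}"
    @state = next_state
  end
end

order = Order.new
order.advance!  # request 1: pending -> paid
order.advance!  # request 2: paid -> shipped
order.state     # => "shipped"
```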

Ruote, as implemented now, needs threading 1) for its workqueue and 2)
for its scheduling.

State machines in Rails can be made to deal with 'scheduling' via cron
(or at); see Matt's comment at:
http://stackoverflow.com/questions/349711/ruby-on-rails-state-machines
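Such a cron-driven timeout sweep could look like this (purely
illustrative; Task, deadline and the state names are made up, and a
real Rails app would run a database query instead of scanning an
array):

```ruby
require 'time'

# A record with a state and a deadline; in Rails this would be an
# ActiveRecord model.
Task = Struct.new(:id, :state, :deadline)

tasks = [
  Task.new(1, 'in_progress', Time.now - 60),   # already past its deadline
  Task.new(2, 'in_progress', Time.now + 3600)  # still has time
]

# The sweep a cron job would run every few minutes: any in-progress
# task past its deadline gets transitioned to 'timed_out'.
tasks.each do |t|
  t.state = 'timed_out' if t.state == 'in_progress' && t.deadline < Time.now
end
```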

Back to ruote: the scheduling aspect is important. Many workflow /
business processes use timeouts (this job has to be done within 3
days) or wait times (wait 1 week, then proceed with the next phase)...
Ruote uses rufus-scheduler for that (and ruote has a plain <cron>
expression: http://openwferu.rubyforge.org/expressions.html#exp_cron).
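What rufus-scheduler does for ruote can be caricatured with a plain
Thread: "wait, then proceed with the next phase". This is a toy
stand-in, not rufus-scheduler's actual API (which offers in/at/every
and cron scheduling):

```ruby
# Toy scheduler: fire a block after a delay, on its own thread, so the
# main flow isn't blocked (this is the part that needs threading).
class ToyScheduler
  def initialize
    @threads = []
  end

  # schedule the given block to run after +seconds+ seconds
  def schedule_in(seconds, &block)
    @threads << Thread.new do
      sleep seconds
      block.call
    end
  end

  def join
    @threads.each(&:join)
  end
end

log = []
s = ToyScheduler.new
s.schedule_in(0.1) { log << :phase_two }  # "wait, then proceed"
log << :phase_one                         # meanwhile the current phase runs
s.join
log  # => [:phase_one, :phase_two]
```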

Ruote's workqueue is essential. A launch request to ruote isn't
executed immediately: the root of the process instance is created, an
"apply" call is placed on the workqueue, and the engine immediately
replies with a pointer (a flow expression id) to the root of the
process instance (so that you know the process instance id / workflow
instance id). All the work occurs asynchronously as the worker thread
picks apply/reply calls off the workqueue and processes them one by
one.
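That launch sequence can be caricatured in a few lines (a toy, not the
real engine; ToyEngine, the "fei_*" strings and the log are made-up
stand-ins for the real flow expression ids and expression pool):

```ruby
# Toy version of ruote's workqueue: launch() places an :apply on the
# queue and returns a pointer immediately; a single worker thread
# drains the queue asynchronously, one call at a time.
class ToyEngine
  attr_reader :log

  def initialize
    @queue = Queue.new
    @log = []
    @counter = 0
    @worker = Thread.new do
      loop do
        op, fei = @queue.pop
        break if op == :stop
        @log << [op, fei]  # stand-in for applying/replying an expression
      end
    end
  end

  # create a root, enqueue an :apply, reply at once with a toy
  # "flow expression id" so the caller knows the process instance id
  def launch
    fei = "fei_#{@counter += 1}"
    @queue << [:apply, fei]
    fei
  end

  def shutdown
    @queue << [:stop, nil]
    @worker.join
  end
end

engine = ToyEngine.new
fei = engine.launch  # returns immediately, work happens on the worker
engine.shutdown
```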

There's a bit of context for that at:
http://jmettraux.wordpress.com/2008/09/07/how-does-ruote-work/ (but
there have been some optimizations since).

Somehow it would be possible to have a multiple-process ruote (if
caching is disabled). But schedulers would compete for expressions; as
Kenneth said, some juggling to prioritize schedulers would be
required.

Ruote-rest is a nice approach: a web application for ruote alone. You
can think of it as a backend workflow server.

Ruote-web2 is more of an example of a small application (BTW, I've had
the ugly ruote-web running for months at
http://difference.openwfe.org:3000/ without issues, single Mongrel on
a cheap Linode instance, small app, small audience).

(For future versions of ruote, I was thinking about making the
workqueue HTTP-driven: applies are POSTs, replies are merely PUTs
(cancels are DELETEs). The application would become an [HTTP] client
to itself... But that's just a crazy idea for now...)
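Spelled out, the verb mapping I have in mind would be something like
this (hypothetical, none of it is implemented):

```ruby
# Hypothetical mapping of workqueue calls to HTTP verbs (an idea only,
# not an implemented ruote API)
HTTP_WORKQUEUE_VERBS = {
  :apply  => 'POST',    # launching / applying an expression
  :reply  => 'PUT',     # an expression replying to its parent
  :cancel => 'DELETE'   # cancelling an expression / process branch
}
```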

I'm looking forward to any help with ruote-rest and ruote-web2.
Kenneth has helped a lot with ruote-rest, and I'm looking forward to
continuing this collaboration.


Thanks again to Kenneth for his great explanation, best regards,

-- 
John Mettraux   -   http://jmettraux.wordpress.com

--~--~---------~--~----~------------~-------~--~----~
you received this message because you are subscribed to the "ruote users" group.
to post : send email to [email protected]
to unsubscribe : send email to [email protected]
more options : http://groups.google.com/group/openwferu-users?hl=en
-~----------~----~----~----~------~----~------~--~---
