I agree the proposal for BLOCKING, ASYNC, RELIABLE, TRANSACTIONAL (BART) is
a better way to go, and will make the WorkQueue idea irrelevant.    I was
looking for an incremental change that could be applied to the trunk, while
we're waiting for the BART branch to materialize and stabilize.

Regardless, I think allowing the thread pool to be shared with the IL is a
good thing.  It means fewer threads in the system and better resilience to
load.  So for the ASYNC case, I hope the ILs can use the shared thread pool
whenever possible.

alex


On 6/8/07, Maciej Szefler <[EMAIL PROTECTED]> wrote:

That strikes me as addressing the issue at the wrong level in the
code---if we want things to happen in one thread, then the engine
should just do them in one thread, i.e. not call the scheduler until it
has given up on the thread. Introducing a new concept (work queue)
that is shared between the engine and integration layer would be
confusing... it's bad enough that the IL uses the scheduler, which it
really should not.

-mbs

On 6/8/07, Alex Boisvert <[EMAIL PROTECTED]> wrote:
> As a first step, I was thinking of allowing the composition of work that is
> currently done in several unrelated threads into a single thread, by
> introducing a WorkQueue.
>
> Right now we have code in the engine, such as
> org.apache.ode.axis2.ExternalService.invoke() -> afterCompletion() that uses
> ExecutorService.submit(...) and I'd like to convert this into
> WorkQueue.submit().
>
> For example, this means that org.apache.ode.axis2.OdeService would first
> execute the transaction around odeMex.invoke() and after commit it would
> dequeue and execute any pending items in the WorkQueue.  We would also need
> to do the same in BpelEngineImpl.onScheduledJob() and other similar engine
> entrypoints.
>
> The outcome of this is that we could execute all the "non-blocking" work
> related to an external event in a single thread, if desired.  Depending on
> the WorkQueue implementation, we could have pure serial processing, parallel
> processing (like now), or even a mix in-between (e.g. limiting concurrent
> processing to N threads for a given instance).  This would allow for
> optimizing response time or throughput based on the engine policy, or, if we
> want to get sophisticated, by process model.
>
> I think this change is straightforward enough that it could happen in
> the trunk without disrupting it.
>
> Thoughts?
>
> alex
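
To make the quoted proposal concrete, here is a minimal sketch of what such a
WorkQueue could look like. The class and method names (WorkQueue, submit,
drain) are hypothetical illustrations rather than existing ODE code; this
variant implements the pure serial policy, running everything in the draining
thread.

import java.util.concurrent.ConcurrentLinkedQueue;

/**
 * Hypothetical sketch: work submitted while a transaction is in flight is
 * buffered rather than handed straight to an ExecutorService, and only runs
 * when the caller decides to drain the queue.
 */
public class WorkQueue {
    private final ConcurrentLinkedQueue<Runnable> pending =
            new ConcurrentLinkedQueue<Runnable>();

    /** Buffer a unit of work; nothing executes yet. */
    public void submit(Runnable work) {
        pending.add(work);
    }

    /** Execute everything buffered so far, in the calling thread. */
    public void drain() {
        Runnable work;
        while ((work = pending.poll()) != null) {
            work.run();
        }
    }
}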
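
And a sketch of the entry-point pattern described in the quote (OdeService,
BpelEngineImpl.onScheduledJob()): run the transactional part first, then
drain the queued work after commit. The names here (EntryPointSketch,
onExternalEvent) are made up for illustration, and the transaction
begin/commit around odeMex.invoke() is elided.

/**
 * Hypothetical entry-point flow, not actual OdeService/BpelEngineImpl code:
 * the transactional work runs first, and only after it completes does the
 * queued follow-up work execute, in this same thread.
 */
public class EntryPointSketch {
    private final WorkQueue queue = new WorkQueue();

    public void onExternalEvent(Runnable transactionalWork) {
        // 1. Transactional part (odeMex.invoke() in the real code); the
        //    surrounding transaction begin/commit is omitted in this sketch.
        transactionalWork.run();

        // 2. After commit, dequeue and execute any items that were submitted
        //    to the WorkQueue during the transaction.
        queue.drain();
    }
}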
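
Finally, a sketch of the "mix in-between" policy mentioned in the quote: the
same submit/drain contract, but drained work is handed to a pool capped at N
threads. Again hypothetical, only to show that the concurrency policy can sit
behind the WorkQueue without changing its callers.

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Hypothetical bounded-concurrency variant: buffered work is executed on a
 * pool limited to maxThreads instead of in the draining thread.
 */
public class BoundedWorkQueue {
    private final ConcurrentLinkedQueue<Runnable> pending =
            new ConcurrentLinkedQueue<Runnable>();
    private final ExecutorService pool;

    public BoundedWorkQueue(int maxThreads) {
        // maxThreads == 1 gives serial (but asynchronous) processing; a larger
        // cap limits concurrent processing, e.g. per instance.
        this.pool = Executors.newFixedThreadPool(maxThreads);
    }

    public void submit(Runnable work) {
        pending.add(work);
    }

    public void drain() {
        Runnable work;
        while ((work = pending.poll()) != null) {
            pool.execute(work);
        }
    }
}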
