Assaf,

I understand the concepts as you have outlined them, and within this
discussion I could use some help mapping them to the PXE "virtual
machine" implementation.

Thanks,

Lance


On 5/11/06, Assaf Arkin <[EMAIL PROTECTED]> wrote:

On 5/11/06, Bill Flood <[EMAIL PROTECTED]> wrote:
>
> Jacob did not solve that as far as it was explained to me -
> concurrency was "achieved" in PXE by "extending" BPEL outside of Jacob
> with proprietary threading infrastructure around the invoke point.
> All that extending was not, again as far as some of us could tell,
> specific to BPEL itself.  While the concurrency discussion is a good
> one in some context, concurrency is a discussion unrelated to Jacob as
> far as I can discern.  I'm trying to keep the rationalization of Jacob
> very clear because it is a very important point.


I believe the reference was to concurrency within the process. Other
concurrency issues are addressed in different parts of the engine.

Concurrency within the process deals with parallel flows (flow activity,
parallel foreach, event handlers) and the complexity that comes from
synchronization (links, faults, isolated scopes, completion conditions).
The time it takes to implement a foreach activity is a good measure, since
foreach can execute multiple branches in parallel, yet has several points
for synchronization (completion of all branches, completion conditions,
termination, faults).
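
For concreteness, here is a rough sketch of the join bookkeeping a parallel
foreach with a completion condition implies. These are not Jacob's or PXE's
actual classes; it is only an illustration of the synchronization points
(branch completion, completion condition, termination of the remaining
branches):

    // Illustration only (not Jacob's or PXE's actual classes): the join
    // bookkeeping for a parallel foreach with N branches and a completion
    // condition of K.
    public class ForEachJoin {
        private final int branches;          // number of parallel branches (N)
        private final int completionCount;   // complete once this many succeed (K)
        private int ended = 0;
        private int succeeded = 0;
        private boolean done = false;

        public ForEachJoin(int branches, int completionCount) {
            this.branches = branches;
            this.completionCount = completionCount;
        }

        // Called by the engine's scheduler each time one branch ends.
        public void branchEnded(boolean success) {
            if (done) return;                 // ignore branches ending after termination
            ended++;
            if (success) succeeded++;
            if (succeeded >= completionCount) {
                done = true;
                terminateRemainingBranches(); // completion condition met: stop the rest
            } else if (ended == branches) {
                done = true;                  // all branches ended without meeting the
                                              // condition: a completion failure fault
            }
        }

        private void terminateRemainingBranches() {
            // engine-specific: signal each still-running branch to terminate
        }
    }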


Based on other posts I have been under the impression that the JACOB
engine/virtual machine was not creating new JVM threads (due to
complexities around transaction enlistment, context locking, etc.). So
in practice these parallel flows are actually serialized by the PXE
virtual machine? In other words, a single input message/event will use a
single JVM thread of execution within the BPEL "virtual machine". Are
these assumptions correct?
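
For concreteness, the model I am assuming is something like the following.
This is not PXE/Jacob code, just a minimal sketch of how "parallel" branches
could interleave on one JVM thread as queued continuations:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Rough model (not PXE/Jacob's actual classes): "parallel" branches are
    // small continuations on a run queue; one JVM thread drains the queue,
    // so the branches interleave but never run simultaneously.
    public class CooperativeScheduler {
        private final Deque<Runnable> runQueue = new ArrayDeque<>();

        public void enqueue(Runnable continuation) {
            runQueue.addLast(continuation);
        }

        // Runs on a single worker thread, e.g. within one transaction.
        public void run() {
            while (!runQueue.isEmpty()) {
                runQueue.removeFirst().run(); // each step may enqueue further steps
            }
            // queue empty: the instance is quiescent (waiting on I/O or a timer)
        }

        public static void main(String[] args) {
            CooperativeScheduler s = new CooperativeScheduler();
            s.enqueue(() -> {
                System.out.println("branch A, step 1");
                s.enqueue(() -> System.out.println("branch A, step 2"));
            });
            s.enqueue(() -> {
                System.out.println("branch B, step 1");
                s.enqueue(() -> System.out.println("branch B, step 2"));
            });
            s.run(); // prints A1, B1, A2, B2: interleaved on one thread
        }
    }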

Separately from that, there are threads dedicated to executing activities
and threads dedicated to sending/receiving messages. This architecture
allows some threads to keep executing activities while other threads are
waiting to send and receive messages. It helps with tuning, since the
activity-executing threads put load on the server (CPU, database), while
the threads sending/receiving messages are I/O bound. Processes that are
very I/O bound will require a lot of send/receive threads and only a few
execution threads.
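
As a rough illustration (the pool sizes and method names here are made up,
not PXE's actual configuration), the separation looks something like this:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustration only: a small pool for CPU/database-bound activity
    // execution, a larger pool for blocking send/receive I/O.
    public class DecoupledPools {
        private final ExecutorService activityPool = Executors.newFixedThreadPool(4);
        private final ExecutorService ioPool = Executors.newFixedThreadPool(32);

        // An invoke hands the blocking call to the I/O pool and frees its
        // execution thread; the response is handed back to the activity pool.
        public void invoke(String request) {
            ioPool.submit(() -> {
                String response = blockingSendReceive(request);        // I/O-bound wait
                activityPool.submit(() -> continueProcess(response));  // resume on an execution thread
            });
        }

        private String blockingSendReceive(String request) {
            // placeholder for a synchronous web-service call
            return "response-to-" + request;
        }

        private void continueProcess(String response) {
            // placeholder for resuming the process instance with the reply
            System.out.println("resumed with " + response);
        }

        public static void main(String[] args) throws InterruptedException {
            DecoupledPools pools = new DecoupledPools();
            pools.invoke("order-123");
            Thread.sleep(500); // let the async hand-off complete in this demo
            pools.ioPool.shutdown();
            pools.activityPool.shutdown();
        }
    }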

That has nothing to do with BPEL; it's just a better architecture for
messaging, especially for supporting low-latency operations. You'll find
the same behavior in Axis2, .NET and many other modern messaging
frameworks. Right now this is handled by PXE code, but if we switch to
Axis2 we would still prefer to use decoupled sender/receiver threads;
we'll just delegate their lifecycle to Axis.


Yes, and to that end I believe the goal of the API that Maciej is working
on is to abstract the core BPEL "virtual machine" away from the messaging
architecture.

http://ws.apache.org/sandesha/architecture.html
http://www.onjava.com/pub/a/onjava/2005/07/27/axis2.html?page=2
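
Something along these lines, perhaps (the interface names below are
invented for illustration; the actual API Maciej is designing may look
quite different):

    // Hypothetical sketch of the boundary being described.

    // What the messaging layer (PXE today, possibly Axis2 later) provides
    // to the engine for outbound invokes.
    interface MessagingService {
        void send(String partnerLink, String operation, String payload,
                  ReplyCallback callback);
    }

    interface ReplyCallback {
        void onReply(String payload);
        void onFault(String faultName, String detail);
    }

    // What the engine exposes to the messaging layer for inbound messages.
    interface BpelEngine {
        void deliver(String service, String operation, String payload);
    }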

And of course there's thread management, time scheduling, service
lifecycle, etc., which we delegate to the app server layer.

Assaf



--
CTO, Intalio
http://www.intalio.com
