On Oct 17, 2006, at 8:01 AM, Peter Cousins wrote:
1) Do you think bindings need to advertise whether they use
non-WorkScheduler threading via the manifest or some other mechanism
to accommodate deployment issues/constraints?
Perhaps.
In our model there is nothing special about bindings - they are
treated by the fabric as normal SCA components (albeit with a
"system" implementation). I think that is consistent, as the SCA spec
(the Java one at least) allows applications to create their own
threads, so we have these threading issues for all SCA components.
In general we don't want code creating its own threads, so we should
provide easy access to the WorkScheduler to encourage components to
"do the right thing." We can enforce this through security, but in a
trusted environment you'd want to be able to turn that off to avoid
the performance hit.
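For example, a binding that might otherwise spin up its own Thread
could just have the scheduler handed to it and submit work instead.
This is only a sketch - the injection style and the WorkScheduler
shape shown here are illustrative, not a statement of the current SPI:

    // Hypothetical, simplified scheduler contract a binding would be
    // handed by the fabric (not the actual SPI signature).
    interface WorkScheduler {
        void scheduleWork(Runnable work);
    }

    // A binding service: instead of "new Thread(...)", it submits work.
    class EchoBindingService {
        private final WorkScheduler scheduler;

        EchoBindingService(WorkScheduler scheduler) {   // injected by the fabric
            this.scheduler = scheduler;
        }

        void onMessage(String payload) {
            // Route the inbound message through the shared scheduler so
            // the runtime keeps control of threading (and can police it).
            scheduler.scheduleWork(() -> handle(payload));
        }

        private void handle(String payload) {
            // ...invoke the target service here...
        }
    }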
2) Is this what you are saying about direct dispatch?
a) bindings should use the WorkScheduler to queue all I/O unless not
possible
b) bindings should dispatch using a scheduleWork call on the
WorkScheduler
c) the WorkScheduler right now processes these requests without
constraints
d) direct dispatch could be implemented by modifying scheduleWork to
call work.run() synchronously
I know I am reading between the lines quite a bit here, so any
clarification would be appreciated.
My thinking behind the WorkScheduler was that it was for scheduling
work within the server rather than I/O. By work I mean
processor-intensive processing, frequently involving interaction with
application code that expects a synchronous model.
I think we may want to add another scheduler for I/O operations,
which is essentially a pool for executing async I/O completion events.
I would hope the decision on direct dispatch can be made by the
WorkScheduler based on metadata associated with the work.
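To make that concrete, something like the following is the kind of
decision I mean. SchedulableWork, preferDirectDispatch() and the pool
wiring are all made-up names for illustration, not what we have today:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical marker interface the scheduler could consult on submission.
    interface SchedulableWork extends Runnable {
        boolean preferDirectDispatch();   // run on the submitting thread?
    }

    class MetadataAwareWorkScheduler {
        // Pool for ordinary (processor-intensive) work; a separate pool
        // could be added for async I/O completion events as suggested above.
        private final ExecutorService pool = Executors.newCachedThreadPool();

        public void scheduleWork(Runnable work) {
            if (work instanceof SchedulableWork
                    && ((SchedulableWork) work).preferDirectDispatch()) {
                work.run();            // direct dispatch: caller's thread
            } else {
                pool.execute(work);    // normal case: hand off to the pool
            }
        }
    }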
3) Also, is there any way to set quotas using the WorkScheduler? For
example, I created a binding for files that uses the WorkScheduler to
monitor inbound directories, and I would like to set some limits on
how many inbound files can be dispatched on a single endpoint, to
prevent one binding service instance from starving the rest of the
system when many files arrive at once. Now this could be implemented
completely in the binding logic, but it would be nice if it worked
consistently and cooperatively across other bindings (i.e., I
wouldn't want my binding starved by the Axis binding or the JMS
binding either). This is related to my question about dispatching.
Quotas could be implemented by modifying the WorkScheduler internals,
but I think we would need a common base class that provides more
information about the work item.
I think this all falls into the category of metadata about the work.
Things like priority, load estimation, blocking/non-blocking etc.
should all be defined as part of the work being submitted to allow
the scheduler to dispatch appropriately. I've been thinking this would
be part of an interface on the submitted work rather than a base class.
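As a strawman for what such an interface might carry, and how a
scheduler could use it to enforce per-endpoint quotas cooperatively
across bindings - WorkMetadata, quotaKey() and the semaphore-based
limit are all invented names, just to show the shape:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    // Hypothetical metadata carried by submitted work.
    interface WorkMetadata {
        int priority();          // relative importance
        boolean blocking();      // may block on I/O or application code
        String quotaKey();       // e.g. the endpoint this work belongs to
    }

    class QuotaEnforcingScheduler {
        private static final int PER_KEY_LIMIT = 10;   // illustrative limit
        private final ExecutorService pool = Executors.newFixedThreadPool(20);
        private final Map<String, Semaphore> quotas = new ConcurrentHashMap<>();

        public <T extends Runnable & WorkMetadata> void scheduleWork(T work) {
            Semaphore quota = quotas.computeIfAbsent(
                    work.quotaKey(), k -> new Semaphore(PER_KEY_LIMIT));
            if (!quota.tryAcquire()) {
                // One endpoint has hit its limit; reject (or queue) rather
                // than letting it starve everything else.
                throw new IllegalStateException("quota exceeded: " + work.quotaKey());
            }
            pool.execute(() -> {
                try {
                    work.run();
                } finally {
                    quota.release();
                }
            });
        }
    }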
4) If you get rid of the compositeContext, how will unmanaged code do
the locateService?
We really need to define what it means for unmanaged code to interact
with the SCA System. At the moment the Java spec assumes there is a
"current" composite context associated with the thread. Is says
there's some "default" context but does not define which one is
selected.
IMO I think we need to bring the "System" concept into the unmanaged
programming model and provide a way for unmanaged code to "connect"
to a System. We need a concept of System hierarchy to capture the
domains of control (e.g. wiring and/or policy may be constrained to
part of the global System such as within a single process,
implementation, cluster or other federation).
One option here might be to allow unmanaged code to "connect" to a
composite within that system hierarchy. From there it would be able
to locate and use services exposed by the composite.
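To illustrate the kind of programming model I have in mind for
unmanaged code - all of the names here (SCASystem, connect(), the
composite URI, AccountService) are hypothetical, not anything we ship:

    // Hypothetical entry point for unmanaged (non-SCA-managed) code.
    public final class SCASystem {

        private SCASystem() {}

        // Attach to a composite somewhere in the System hierarchy.
        public static CompositeHandle connect(String compositeUri) {
            // resolution against the running System would happen here
            throw new UnsupportedOperationException("sketch only");
        }

        public interface CompositeHandle extends AutoCloseable {
            <S> S locateService(Class<S> serviceInterface, String serviceName);
            void close();   // detach from the System
        }
    }

    // Usage from unmanaged code (AccountService is just a placeholder):
    //
    //   try (SCASystem.CompositeHandle composite =
    //            SCASystem.connect("sca://local-process/accounts")) {
    //       AccountService accounts =
    //           composite.locateService(AccountService.class, "AccountService");
    //       ...
    //   }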
5) Details on thread affinity:
Suppose I have a legacy transport that I want to create a binding
for. This transport has thread affinity in that:
a) a message arrives (call its thread T1)
b) the service request executes at the application layer
c) the application forwards the request via the same transport to
another node
If c does not execute in the same thread as a, the message processing
cannot be atomic because the commits cannot be coordinated in
different threads of control.
If b executes in a thread other than T1, commit and rollback cannot
be tied to successful execution of the application logic, because the
commit would have to occur before the thread of control transfers.
Does this make sense?
Yes - it's flow-through as a single unit of work with coordination
between receiver and transmitter.
We'd want to support that for cases where the receive and transmit
are done with different bindings as well.
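As a rough illustration of the flow-through case, reusing the
hypothetical SchedulableWork interface sketched under question 2: if
the binding packages the whole exchange as one unit of work and asks
for direct dispatch, receive, application invocation and onward
transmit all stay on the receiving thread, so one transaction can
span the whole thing. Again, just a sketch under those assumptions:

    // Sketch: the binding packages receive -> invoke -> forward as one
    // unit of work and asks for direct dispatch so it all stays on the
    // thread the message arrived on (T1 in the example above).
    class FlowThroughWork implements SchedulableWork {
        private final Runnable applicationInvocation;   // step b
        private final Runnable forwardOnSameTransport;  // step c

        FlowThroughWork(Runnable applicationInvocation,
                        Runnable forwardOnSameTransport) {
            this.applicationInvocation = applicationInvocation;
            this.forwardOnSameTransport = forwardOnSameTransport;
        }

        public boolean preferDirectDispatch() {
            return true;   // keep the whole unit of work on the receiving thread
        }

        public void run() {
            // begin transaction (coordinated by the transport)
            applicationInvocation.run();
            forwardOnSameTransport.run();
            // commit here, still on the original thread
        }
    }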
We don't really have policy support ATM (and this pertains to
transaction policy) but I think it's one of the things we need to
sort out soon once we get past M2.
--
Jeremy