On Sep 3, 2007, at 5:50 AM, James Strachan wrote:

* Async exchange handling should always be assumed.

I'm not totally sure about this one. One of the reasons JBI is so
complex to work with is that it assumes all the hard stuff is always
the case. e.g. if you want to use transactions, being async is a major
nightmare. One of the major reasons why declarative transactions in
spring are so easy to use is that it assumes a single threaded,
synchronous programming model like servlets / EJB3 which simplifies
the developers job tremendously.

Async handling should be assumed because you don't know whether a given component will behave synchronously or asynchronously. Control of that is outside the developer's hands.


* Thread semantics need to be cleaned up. I believe all built in
components should be single-thread oriented (ie, only one message at
a time coming out).

Isn't that conflicting with the previous async comment? :)

Not at all. Saying the built in components should produce messages on a single thread is completely in line with that. The point is that, in the common case, there should be one, and only one, thread pushing messages out of a component. Oddly enough, this allows for synchronous reasoning.

It all depends really; there are many different use cases, so it's kinda
hard to be too sweeping. e.g. folks might want to use efficient,
parallel consumption of JMS messages with Camel; using Spring's JMS
MessageListenerContainers in the component/endpoints, which do pooling
and support concurrent message dispatch.

Some things should support concurrent dispatch. The default should be single threaded dispatch though, as it is a clearer baseline. Even JMS can be made single threaded very easily (receive() in a loop), with a flag to enable concurrency.
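To make the baseline concrete, here is a plain-Java sketch (not a real JMS consumer, and not Camel's API) of the receive()-in-a-loop idea: one thread, one blocking pull, strictly ordered dispatch. Concurrency would be an explicit opt-in, e.g. spinning up N of these loops.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: the single-threaded consumption baseline. The queue stands in
// for a JMS destination; take() stands in for a blocking receive().
public class SingleThreadedConsumer {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public void send(String msg) { queue.add(msg); }

    // One thread, one loop: messages are dispatched strictly in order.
    public List<String> receiveLoop(int count) throws InterruptedException {
        List<String> received = new ArrayList<>();
        while (received.size() < count) {
            received.add(queue.take());   // blocks, like JMS receive()
        }
        return received;
    }

    public static void main(String[] args) throws Exception {
        SingleThreadedConsumer c = new SingleThreadedConsumer();
        c.send("a"); c.send("b"); c.send("c");
        System.out.println(c.receiveLoop(3));   // [a, b, c]
    }
}
```

Because there is only one consuming thread, ordering and single-message-at-a-time reasoning come for free; that is the "clearer baseline" being argued for.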

I would definitely agree that one of the responsibilities of a
component/endpoint is to clearly define its threading semantics so
users can understand the threading model; it's just I can see different
users wanting different threading requirements.

Yes, different users *will* want different things, which is why it needs the simplest baseline by default. If a user, for some godawful reason ;-), wants totally serialized processing from end to end, it should be achievable.

* Corollary -- the seda component should go away and be replaced with
a thread pool policy. This should be done as a policy/interceptor as
it is not an endpoint unto itself, it just controls how messages make
their way between. Given:

        from("file:foo").threadPool(10).to("wombat:nugget")
          or
        from("file:foo").threads(10).to("wombat:nugget")

FWIW you could do the same with URIs too...

from("file:foo").to("seda:mythreadpool?size=10").to("wombat:nugget")

i.e. use a URI to define named thread pools of different sizes &
configurations (min/max size etc)

I think you missed my point on this one. It isn't just to provide a fluent API method for a thread pool; heck, that is beside the point and was merely a (it turns out, bad) example. What is presently the SEDA endpoint should cease to be an endpoint and become a policy. The reason has to do with an "exchange" being between two endpoints and when you consider it to be complete.

If, hypothetically, losing a message due to a process crash is not okay, you need to be able to make sure all components have some kind of persistence built into them. For a lot of components this is obvious -- JMS, File, JPA, etc; for others, not so much. As soon as you throw in the current seda component, though, you have a big memory-only message-loss buffer waiting for the JVM to croak. By making it a policy nothing is lost in terms of flexibility, but you regain the ability to do guaranteed delivery between components while having that delivery be asynchronous inside the VM.

It's worth noting too that some components need to deeply control
thread pools; e.g. in the Spring based MessageListenerContainer stuff for
pooled JMS / JCA consumption, the thread pooling and the
endpoint/component are deeply entwined & it's not that easy or useful
to separate the threading.

Yes, but in many cases it is useful and necessary.

Though I agree it might be nice to enhance the DSL with threading
semantics; similarly we might want to be able to define pooling
semantics for processors/transformers when used in a highly concurrent
route.

Hmm, I had been presuming processors and transformers needed to be thread-safe and were accessible concurrently. This, or using a processor/transformer instance per invocation, would better match the cases folks are familiar with in Java land, I believe.
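A quick plain-Java sketch of the thread-safe presumption (this is not Camel's actual Processor interface, just an illustration): a stateless transformer keeps all per-message state on the stack, so one shared instance can safely be invoked from many route threads at once.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Sketch: a stateless transformer. No mutable fields means one instance
// can be shared and invoked concurrently without locking.
public class UpperCaseProcessor {
    public String process(String body) {
        return body.toUpperCase();   // all state lives on the stack
    }

    public static void main(String[] args) throws Exception {
        UpperCaseProcessor shared = new UpperCaseProcessor();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Four threads hammer the same shared instance concurrently.
        List<Future<String>> results = IntStream.range(0, 4)
            .mapToObj(i -> pool.submit(() -> shared.process("msg" + i)))
            .collect(Collectors.toList());
        for (Future<String> f : results) System.out.println(f.get());
        pool.shutdown();
    }
}
```

The alternative mentioned above -- a fresh processor instance per invocation -- trades a small allocation cost for letting the processor hold per-message fields, much like the servlet vs. per-request-object split folks already know.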

  The file component would push to a blocking queue which ten threads
pull from and pass on to "wombat:nugget". This gives the developer clear
control over how threads are allocated and used -- a Good Thing(r).
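The handoff described above can be sketched in plain Java (the queue sizes, names, and the AtomicInteger standing in for "wombat:nugget" are illustrative, not any real Camel internals): the "from" side pushes onto a bounded queue, and ten pooled workers pull and deliver downstream.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a thread-pool policy sitting *between* two endpoints, rather
// than being an endpoint itself.
public class ThreadPoolPolicy {
    public static int run(int messages, int threads) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(100);
        AtomicInteger delivered = new AtomicInteger();   // stands in for "wombat:nugget"
        CountDownLatch done = new CountDownLatch(messages);

        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                try {
                    while (true) {
                        queue.take();                    // pull from the policy's queue
                        delivered.incrementAndGet();     // hand to the "to" endpoint
                        done.countDown();
                    }
                } catch (InterruptedException stop) { /* pool shutting down */ }
            });
        }
        // The "from" endpoint (e.g. file:foo) producing messages:
        for (int i = 0; i < messages; i++) queue.put("msg-" + i);

        done.await();
        pool.shutdownNow();
        return delivered.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(50, 10));   // 50 -- every message delivered
    }
}
```

Note the queue here is still an in-memory buffer; the argument in the thread is that, as a *policy*, the exchange between the two real endpoints stays intact, so a guaranteed-delivery strategy can still reason about it end to end.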

Yeah - though the seda equivalent (assuming the seda component is
sufficiently documented) could also be just as clear I'd hope?

No, because it inserts an artificial endpoint into the middle, when it is really modifying the delivery process, not where the message is going. How you get to an endpoint is different from which endpoint you are going to. Conflating How with Where makes reasoning about things like guaranteed delivery, documenting routes, and verifying correctness with domain experts ("ah, see, the seda endpoint isn't really an endpoint, it is like a multiplexer... no, wait, there is only the one message still, see, SEDA is this technique... yes, I know, you don't care about the plumbing details, sorry") much harder :-)

-Brian
