Rupert Smith wrote:
At a guess, I'd say you are a Python programmer, and missing its more
dynamic capabilities...

I guess I'm outed. ;)

I do wonder if running every event through around two dynamic switches,
dispatching to handlers looked up in a hashtable, might be a little slow?
Although I admire the cleverness and neatness of the solution, and it is
perfectly possible that it will be fast enough. I might put a little timing
test around one of those switches and find out just how fast it runs
compared with a 'switch' statement.

Of course, some of your switches dispatch based on a short constant, so you
could replace the hash table lookups with real 'switch' statements if need
be.

The Switch class is really just there to keep the stubs concise. As you say, if necessary, both uses could easily be replaced with a manual switch statement.

That said, I don't believe this will be necessary: currently every incoming frame gets routed through AMQStateManager, which itself does two hashtable lookups to find the eventual handler of the frame. The proposed design uses a delegation pattern that accomplishes the same thing as AMQStateManager based purely on method dispatch. This makes the number of hashtable lookups equal in both designs, with the potential to optimize it down to zero in the proposed design.
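To make the contrast concrete, here is a minimal sketch of the two dispatch styles. All of the names below (Frame, ConnectionOpen, FrameHandler, TableDispatcher, Delegate, DispatchingConnectionOpen) are hypothetical illustrations, not the actual Qpid classes:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical frame types for illustration only.
interface Frame { }

class ConnectionOpen implements Frame { }

// Style 1: hashtable dispatch -- the handler for a frame is looked up
// in a map keyed on the frame's class.
interface FrameHandler { String handle(Frame frame); }

class TableDispatcher
{
    private final Map<Class<?>, FrameHandler> handlers = new HashMap<>();

    TableDispatcher() { handlers.put(ConnectionOpen.class, frame -> "opened"); }

    String dispatch(Frame frame) { return handlers.get(frame.getClass()).handle(frame); }
}

// Style 2: delegation -- the frame calls the matching method on a delegate,
// so the JVM's virtual method dispatch replaces the hashtable lookup.
interface Delegate { String connectionOpen(ConnectionOpen frame); }

class DispatchingConnectionOpen extends ConnectionOpen
{
    String dispatch(Delegate delegate) { return delegate.connectionOpen(this); }
}
```

Both styles end at the same handler; the second simply lets the JVM do the routing.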

One idea that springs to mind, looking at this code: Could you make events
self handling?

For example, instead of doing:

handler.handle(event);

what about:

event.setHandler(handler);
event.run();

or:

event.setHandler(handler);
executor.execute(event);
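A self-handling event along those lines might look like the following sketch. The Handler and Event shapes here are illustrative assumptions, not the actual classes:

```java
// Hypothetical sketch of a self-handling event. Because Event is Runnable,
// anything that accepts a Runnable (e.g. an Executor) can process it.
interface Handler
{
    void handle(Event event);
}

class Event implements Runnable
{
    private Handler handler;
    private boolean handled;

    void setHandler(Handler handler) { this.handler = handler; }

    // Running the event dispatches it to its own handler.
    public void run() { handler.handle(this); }

    void markHandled() { handled = true; }

    boolean isHandled() { return handled; }
}
```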

The only reason I suggest this, is so that events become continuations. For
example:

public abstract class Continuation<V> implements Runnable, Callable<V>, Future<V>
{
    /**
     * Applies the delayed procedure.
     */
    public abstract void run();

    /**
     * Applies the delayed procedure, or throws an exception if unable to do so.
     *
     * @return The computed result.
     *
     * @throws Exception If unable to compute a result.
     */
    public V call() throws Exception
    {
        execute();

        return get();
    }

    ...

public class Event extends Continuation
...

As events are Runnable, they can make use of
java.util.concurrent.Executors to run them. A simple executor that does
this immediately is:

   /**
    * A simple executor. Runs the task at hand straight away.
    */
   class ImmediateExecutor implements Executor
   {
       /**
        * Runs the task straight away.
        *
        * @param r The task to run.
        */
       public void execute(Runnable r)
       {
           r.run();
       }
   }

This opens up the possibility of writing some utility code based around
continuations. Some examples:

Many events could be batched together into a single containing event that
executes all of its contained events one after the other. Advantage: less
context switching when running a lot of asynchronous events. See Job and
Event in the existing code.
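The batching idea could be sketched roughly like this, under the assumption that events are Runnable; the BatchEvent name is made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical containing event that runs its contained events back to back
// on a single thread, avoiding one context switch per event.
class BatchEvent implements Runnable
{
    private final List<Runnable> events = new ArrayList<>();

    void add(Runnable event) { events.add(event); }

    // Executes every contained event in submission order.
    public void run()
    {
        for (Runnable event : events)
        {
            event.run();
        }
    }
}
```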

Writing cancellable/interruptible tasks. For example, when a synchronous
request needs to be cancelled and re-sent in the event of failover.
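One way the cancel-and-resend case might look, using the standard FutureTask: since a cancelled FutureTask cannot be re-run, a fresh task is rebuilt from the same Callable. The RetryableRequest name is a made-up illustration:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

// Hypothetical wrapper for a synchronous request that may need to be
// cancelled and re-sent on failover.
class RetryableRequest<V>
{
    private final Callable<V> request;
    private FutureTask<V> task;

    RetryableRequest(Callable<V> request)
    {
        this.request = request;
        this.task = new FutureTask<>(request);
    }

    FutureTask<V> current() { return task; }

    // On failover: cancel the in-flight attempt and rebuild a fresh task
    // from the same callable so the request can be re-sent.
    FutureTask<V> cancelAndResend()
    {
        task.cancel(true);
        task = new FutureTask<>(request);
        return task;
    }
}
```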

Events, or batches of events can be handled by thread pools. We can start
with one single thread pool, to handle all asynchronous events, then
consider whether splitting into staged pools might confer any advantages.
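Starting with a single shared pool could be as simple as the following sketch (the EventPool name and pool size are arbitrary assumptions); staged pools would just mean several such instances, one per stage:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical single shared pool handling all asynchronous events.
class EventPool
{
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    void submit(Runnable event) { pool.submit(event); }

    void shutdown() throws InterruptedException
    {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```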

Asynchronous Executors that take account of priority could be written.
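A priority-aware executor could be assembled from standard pieces, for instance a ThreadPoolExecutor draining a PriorityBlockingQueue so that queued events run highest-priority first. The names below are illustrative; note that events must be submitted via execute(), since submit() would wrap them in a non-comparable FutureTask:

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical priority-carrying event; higher priority sorts first.
abstract class PriorityEvent implements Runnable, Comparable<PriorityEvent>
{
    final int priority;

    PriorityEvent(int priority) { this.priority = priority; }

    public int compareTo(PriorityEvent other)
    {
        return Integer.compare(other.priority, priority);
    }
}

// Single worker thread draining a priority queue of events.
class PriorityExecutor
{
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
        1, 1, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());

    void execute(PriorityEvent event) { pool.execute(event); }

    void shutdown() throws InterruptedException
    {
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```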

The concept of continuations has been reinvented several times in the
existing code base. It would make sense to refactor and share common code.
Some examples are: FailoverHandler, FailoverSupport, Event, Job,
PoolingFilter, BlockingMethodFrameListener, and I'm sure there are more.

It would definitely make sense to consolidate such things into a single pattern; however, I'm not sure it makes sense to introduce continuations at such a low level. My conception of the responsibility of this layer is to accept incoming I/O events and aggregate, decode, and translate them into higher-level events that are meaningful to the upper domain layers (either client or broker) that use this code.

I would therefore expect the domain layers that use this code to determine the threading model and introduce continuations at that point if it is appropriate for the given event.

That said, I'm not sure I fully understand what you're describing, so there may be ways this layer could make it easier for the domain layers that use this code to introduce continuations should they wish to.

--Rafael


Rupert


On 18/07/07, Rafael Schloming <[EMAIL PROTECTED] > wrote:

Here are some stubs I've been working on that describe most of the
communication layer. For those who like dealing directly with code,
please dig in. I will be following up with some UML and a higher level
description tomorrow.

--Rafael

Arnaud Simon wrote:
> Hi,
>
> I have attached a document describing my view on the new 0-10
> implementation. I would suggest that we first implement a 0.10 client
> that we will test against the 0.10 C++ broker. We will then have a
> chance to discuss all together the Java broker design during our Java
> face to face (Rob should organize it in Glasgow later this year).
>
> Basically we have identified three main components:
> - the communication layer that is common to broker and client
> - the Qpid API that is client specific and plugged on the communication
> layer
> - The JMS API that comes on top of the Qpid API
>
> The plan is to provide support for 0.8 and 0.10 by first distinguishing
> the name spaces. Once the 0.10 client is stable we will then be able to
> provide a 0.8 implementation of the Qpid API (based on the existing code
> obviously). This will have the advantage to only support a single JMS
> implementation.
>
> I will send in another thread the QPI API as Rajith and I see it right
> now. Rafael should send more info about the communication layer.
>
> Regards
>
> Arnaud
>


