Going back to some of the points here...
On Jan 29, 2008 2:25 PM, Mike Heath <[EMAIL PROTECTED]> wrote:

>
> Connecting - Connecting is done as a blocking operation.  In Jeff
> Genender's AHC branch in the Geronimo sandbox, thread pools are being
> used for asynchronous connecting.  This is unfortunate since MINA
> already has this functionality and does it in a much lighter-weight
> manner than using a thread pool.


I'm a little confused as to what you mean when you say "connecting is done
as a blocking operation".  You are not saying AHC's connect is done as a
blocking operation, right? :)  As for the thread pool, I thought MINA's
socket connector involves a thread pool (Executor) one way or the other, no?
Is there a way to use connectors without involving a thread pool (whether
the caller supplies one or the socket connector constructor creates one)?
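
For reference, here is roughly the connect usage I have in mind, written
against the MINA 1.x SocketConnector constructor that takes an Executor
(class names and constructor signatures differ in 2.0, so treat this as an
illustrative sketch rather than the exact API):

import java.net.InetSocketAddress;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

import org.apache.mina.common.ConnectFuture;
import org.apache.mina.common.IoHandlerAdapter;
import org.apache.mina.transport.socket.nio.SocketConnector;

public class ConnectSketch {
    public static void main(String[] args) {
        // Caller-supplied executor; MINA can also create one internally.
        Executor executor = Executors.newCachedThreadPool();
        SocketConnector connector = new SocketConnector(1, executor);

        // connect() returns immediately; the ConnectFuture completes when the
        // connection is established (or fails), so the caller never blocks
        // inside connect() itself.
        ConnectFuture future = connector.connect(
                new InetSocketAddress("example.org", 80), new IoHandlerAdapter());

        future.join();  // block here only if and when the caller wants to
    }
}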



>
>
> Completion Notification - With the existing AHC, there's a single
> callback for the Client.  I REALLY like the observable future pattern
> that MINA uses.  With each asynchronous operation, a future object is
> returned.  This future object can be used to block until the operation
> completes.  The future is also observable so you can also register one
> or more completion listeners with the future.  This makes it really easy
> to do a fork/join-style operation, like:
>
> future1 = doAsynch1();
> future2 = doAsynch2();
> future3 = doAsynch3();
>
> future1.await();
> future2.await();
> future3.await();
>
> or use an event driven approach like:
>
> doAsynch1().addListener(...);
> doAsynch2().addListener(...);
> doAsynch3().addListener(...);
>
> This provides maximum flexibility.  This should be incorporated into the
> AsyncWeb client.


If you look at the current AHC code, it actually *does* use both a future
(ResponseFuture) and a callback (AsyncHttpClientCallback).  These correspond
to the future and the future listener, respectively, so it ended up being
something very similar to what MINA's future does.  It would take a trivial
refactoring to reshape it to look like MINA's future.
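
To make that concrete, here is a rough sketch of what the reshaped interface
could look like.  The method and type names are hypothetical, not the current
AHC API, and HttpResponseMessage is just a placeholder for whatever response
type AHC ends up exposing:

// Hypothetical shape only; illustrative names, not the current AHC API.
public interface ResponseFuture {

    // Block until the response (or an error) arrives, MINA-style.
    ResponseFuture await() throws InterruptedException;

    // Register a completion listener; may be called more than once.
    ResponseFuture addListener(ResponseListener listener);

    // The completed response, once the future is done.
    HttpResponseMessage getResponse();

    interface ResponseListener {
        void operationComplete(ResponseFuture future);
    }
}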

Another thing it supports is a completion queue.  One can fire multiple
non-blocking send() calls to multiple URLs, and sit on the completion queue
to handle the results as they arrive.  Although callers can write their own
code to do things like this, supporting it at the API level would be a nice
thing to keep.  This comes in pretty handy in a scatter-and-gather
situation...
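
As a rough illustration of the pattern in plain Java (a thread pool stands in
for the client's I/O layer here, and the Completion type is a placeholder for
whatever the real API would hand back):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class ScatterGatherSketch {

    // Placeholder completion record; the real AHC response type would differ.
    static class Completion {
        final String url;
        final String body;
        Completion(String url, String body) { this.url = url; this.body = body; }
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> urls = Arrays.asList(
                "http://host-a.example/", "http://host-b.example/", "http://host-c.example/");
        final BlockingQueue<Completion> completions = new LinkedBlockingQueue<Completion>();

        // Scatter: fire the sends without blocking the caller.  The thread
        // pool stands in for the client's I/O layer, which would offer()
        // each response onto the queue as it arrives.
        ExecutorService fakeClient = Executors.newCachedThreadPool();
        for (final String url : urls) {
            fakeClient.execute(new Runnable() {
                public void run() {
                    completions.offer(new Completion(url, "<response for " + url + ">"));
                }
            });
        }

        // Gather: sit on the queue and handle results in arrival order,
        // which is generally not the order the requests were sent in.
        for (int i = 0; i < urls.size(); i++) {
            Completion done = completions.take();
            System.out.println(done.url + " -> " + done.body);
        }
        fakeClient.shutdown();
    }
}

The point is just that the gather side handles responses in whatever order
they arrive, without the caller writing its own synchronization.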

My 2 cents...

Thanks,
Sangjin
